Performance analysis of a pipelined backpropagation parallel algorithm

Authors

  • Alain Pétrowski
  • Gérard Dreyfus
  • Claude Girault
Abstract

The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. The authors consider the successive layers of a feedforward neural network as the stages of a pipeline, which is used to improve the efficiency of the parallel algorithm. A simple placement rule takes advantage of the simultaneous execution of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient. Moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity makes it easy to predict learning performance on a parallel machine for any neural network architecture. The experimental results agree with the analytical estimates.
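To make the layer-pipelining idea concrete, the following is a minimal sketch, not the authors' implementation: the layers of a feedforward network are treated as pipeline stages, so that on each clock tick every layer processes the activations that the previous layer produced for an earlier input, and several inputs are in flight at once. The names (`make_layers`, `run_forward_pipeline`) and the tanh network are illustrative assumptions; the paper pipelines the backward pass as well, which is omitted here.

```python
# Minimal sketch (not the authors' implementation): the layers of a
# feedforward network act as pipeline stages, so on each "clock tick"
# every layer processes the activations that the previous layer produced
# for an earlier input, and several inputs are in flight simultaneously.
import numpy as np

rng = np.random.default_rng(0)

def make_layers(sizes):
    """Random weight matrices for a fully connected feedforward network."""
    return [rng.standard_normal((n_out, n_in)) * 0.1
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def run_forward_pipeline(layers, inputs):
    """Stream input vectors through the layers, one pipeline stage per layer.

    regs[k] is the pipeline register holding the output of stage k.  Stages
    are updated back-to-front so that, within one tick, every stage reads the
    value its predecessor produced on the previous tick.
    """
    n_stages = len(layers)
    regs = [None] * n_stages
    outputs = []
    # Extra empty ticks at the end drain the examples still in the pipeline.
    for x in list(inputs) + [None] * n_stages:
        for k in reversed(range(n_stages)):
            src = x if k == 0 else regs[k - 1]
            regs[k] = None if src is None else np.tanh(layers[k] @ src)
        if regs[-1] is not None:
            outputs.append(regs[-1])
    return outputs

layers = make_layers([4, 8, 8, 2])
batch = [rng.standard_normal(4) for _ in range(5)]
print(len(run_forward_pipeline(layers, batch)))   # -> 5 network outputs
```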


Related articles

A Parallel Implementation of Backpropagation Neural Network on MasPar MP-1

In this paper, we explore the parallel implementation of the backpropagation algorithm, with and without hidden layers, on the MasPar MP-1. The implementation is based on a SIMD architecture and uses a backpropagation model. Our implementation uses weight batching rather than the on-line weight updating used by most serial and parallel implementations of backpropagation. This method results...
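The distinction between weight batching and on-line updating can be illustrated with a small, hedged sketch (a single linear unit trained with squared error; not the MasPar code): on-line updating changes the weights after every example, while batching accumulates the gradient over the whole training set and applies it once, which is the form that maps naturally onto SIMD execution.

```python
# Hedged sketch (not the MasPar code): a single linear unit trained with
# squared error, once with on-line (per-example) updates and once with
# weight batching (one update per pass over the whole training set).
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))            # 64 assumed training examples
y = X @ np.array([0.5, -1.0, 2.0])          # targets from a known linear map

def online_epoch(w, lr=0.05):
    """On-line updating: the weights change after every single example."""
    for x_i, y_i in zip(X, y):
        w = w - lr * ((w @ x_i) - y_i) * x_i
    return w

def batched_epoch(w, lr=0.05):
    """Weight batching: accumulate the gradient over the whole batch and
    apply it once."""
    grad = X.T @ (X @ w - y) / len(X)       # average gradient over the batch
    return w - lr * grad

w_online = w_batch = np.zeros(3)
for _ in range(500):
    w_online = online_epoch(w_online)
    w_batch = batched_epoch(w_batch)
print(np.round(w_online, 2), np.round(w_batch, 2))  # both near [0.5 -1. 2.]
```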


Efficient implementation of low time complexity and pipelined bit-parallel polynomial basis multiplier over binary finite fields

This paper presents two efficient implementations of fast and pipelined bit-parallel polynomial basis multipliers over GF(2^m) by irreducible pentanomials and trinomials. The architecture of the first multiplier is based on a parallel and independent computation of powers of the polynomial variable. In the second structure only even powers of the polynomial variable are used. The par...
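For readers unfamiliar with the operation being accelerated, here is a plain serial software model of polynomial basis multiplication over GF(2^m) with reduction by an irreducible trinomial; the NIST B-233 trinomial x^233 + x^74 + 1 is used as an assumed example. It illustrates the arithmetic only, not the pipelined bit-parallel hardware structures described in the paper.

```python
# Plain serial model of polynomial basis multiplication over GF(2^m), using
# the irreducible trinomial x^233 + x^74 + 1 as an assumed example.  This
# illustrates the arithmetic only, not the pipelined bit-parallel hardware.
M = 233
TRINOMIAL = (1 << 233) | (1 << 74) | 1

def gf2m_mul(a: int, b: int, modulus: int = TRINOMIAL, m: int = M) -> int:
    """Multiply two field elements given as bit vectors of coefficients."""
    # Carry-less (XOR) schoolbook multiplication of the two polynomials.
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduce the degree <= 2m-2 product modulo the irreducible trinomial.
    for deg in range(2 * m - 2, m - 1, -1):
        if (prod >> deg) & 1:
            prod ^= modulus << (deg - m)
    return prod

# Sanity check: x * x^232 = x^233 = x^74 + 1 (mod x^233 + x^74 + 1).
assert gf2m_mul(1 << 1, 1 << 232) == (1 << 74) | 1
```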


Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardwa...


Mapping Of Backpropagation Learning Onto Distributed Memory Multiprocessors

This paper presents a mapping scheme for parallel pipelined execution of the Backpropagation Learning Algorithm on distributed memory multiprocessors (DMMs). The proposed implementation exhibits training set parallelism that involves batch updating. Simple algorithms have been presented, which allow the data transfer involved in both forward and backward execution phases of the backpropagati...
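The training set parallelism with batch updating mentioned above can be sketched as follows; this is a simulated, hedged example in which the single linear model, the shard sizes, and the explicit gradient reduction are assumptions, not the paper's DMM mapping. Each processor computes the gradient on its own partition of the training set, and the partial gradients are combined before one global weight update.

```python
# Hedged sketch of training set parallelism with batch updating: the training
# set is split across P simulated processors, each computes the gradient on
# its own shard, and the partial gradients are combined before one global
# weight update.  Model, shapes and names are assumptions.
import numpy as np

rng = np.random.default_rng(2)
P = 4                                        # number of simulated processors
X = rng.standard_normal((128, 5))
y = X @ rng.standard_normal(5)               # assumed regression targets

def local_gradient(w, X_shard, y_shard):
    """Mean squared error gradient computed on one processor's shard."""
    return X_shard.T @ (X_shard @ w - y_shard) / len(X_shard)

def parallel_batch_step(w, lr=0.1):
    # Each "processor" works on its shard; averaging the partial gradients
    # models the communication (reduction) step before the weight update.
    grads = [local_gradient(w, Xs, ys)
             for Xs, ys in zip(np.array_split(X, P), np.array_split(y, P))]
    return w - lr * np.mean(grads, axis=0)

w = np.zeros(5)
for _ in range(200):
    w = parallel_batch_step(w)
print(np.round(w, 3))                        # recovers the generating weights
```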


Optimal fast digital error correction method of pipelined analog to digital converter with DLMS algorithm

In this paper, the convergence rate of a digital error correction algorithm for correcting capacitor mismatch error and the finite, nonlinear gain of the op-amp is increased significantly by the use of DLMS, an evolutionary search algorithm. To this end, a 16-bit pipelined analog-to-digital converter was modeled. The resulting digital model is an FIR filter with 16 adjustable weights. To adjust weights o...
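As an illustration only, the sketch below adapts the 16 taps of an FIR model with a delayed-LMS style update (LMS in which the error driving each update is several samples old, as happens when the adaptation loop is pipelined). This reads "DLMS" as delayed LMS; the paper's exact algorithm, ADC model, and signals are not reproduced here, and every parameter below is an assumption.

```python
# Hedged sketch: adapting the 16 taps of an FIR model with a delayed-LMS
# (DLMS) style update, i.e. the error driving each update is DELAY samples
# old.  All signals and parameters are assumptions; this is not the paper's
# ADC model or correction scheme.
import numpy as np

rng = np.random.default_rng(3)
N_TAPS, DELAY, MU = 16, 4, 0.005
true_taps = rng.standard_normal(N_TAPS) * 0.5   # "ideal" correction filter

x = rng.standard_normal(20000)                  # assumed input samples
w = np.zeros(N_TAPS)
hist = []                                       # (window, error) pairs awaiting use

for n in range(N_TAPS, len(x)):
    window = x[n - N_TAPS:n][::-1]              # most recent sample first
    d = true_taps @ window                      # desired (reference) output
    e = d - w @ window                          # instantaneous output error
    hist.append((window, e))
    if len(hist) > DELAY:
        # Update with the window and error from DELAY iterations ago.
        old_window, old_e = hist.pop(0)
        w = w + MU * old_e * old_window

print(np.round(np.abs(w - true_taps).max(), 4)) # residual error is tiny
```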



Journal:
  • IEEE Transactions on Neural Networks

Volume 4, Issue 6

Pages: -

Year of publication: 1993